Transfer Learning for OCRopus Model Training on Early Printed Books
A method is presented that significantly reduces the character error rates
for OCR text obtained from OCRopus models trained on early printed books when
only small amounts of diplomatic transcriptions are available. This is achieved
by building from already existing models during training instead of starting
from scratch. To overcome the discrepancies between the character set of the
pretrained model and that of the additional ground truth, the OCRopus code is
adapted to allow for alphabet expansion or reduction: characters can now be
flexibly added to or deleted from the pretrained alphabet when an existing
model is loaded. For our experiments we use a self-trained
mixed model on early Latin prints and the two standard OCRopus models on modern
English and German Fraktur texts. The evaluation on seven early printed books
showed that training from the Latin mixed model reduces the average number of
errors by 43% with 60 lines of ground truth and by 26% with 150 lines,
compared to training from scratch. Furthermore, it is shown that even building
from mixed models trained on data unrelated to the newly added training and
test data can lead to significantly improved recognition results.
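The alphabet adaptation described above can be sketched in a few lines; this is a minimal illustration, assuming the output layer holds one weight row per character (the function and data layout are simplified assumptions, not the actual OCRopus codec code):

```python
import random

def adapt_output_layer(W, old_alphabet, new_alphabet, seed=0):
    """Resize an output weight matrix (one row of weights per character) to a
    new alphabet: rows for characters shared with the pretrained alphabet are
    copied, rows for newly added characters are freshly initialized, and rows
    for deleted characters are dropped. (Illustrative only; the adapted
    OCRopus code operates on its internal codec, not raw weight lists.)"""
    rng = random.Random(seed)
    hidden = len(W[0])
    old_index = {c: i for i, c in enumerate(old_alphabet)}
    rows = []
    for c in new_alphabet:
        if c in old_index:
            rows.append(list(W[old_index[c]]))  # reuse pretrained weights
        else:
            # newly added character: small random initialization
            rows.append([rng.gauss(0, 0.01) for _ in range(hidden)])
    return rows

# Pretrained on "abc"; the new ground truth drops "c" and adds the long s "ſ".
W = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # 3 characters x 2 hidden units
W2 = adapt_output_layer(W, "abc", "abſ")
```

Fine-tuning then continues from the copied rows, so knowledge about shared characters is retained while only the new characters start from scratch.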
Profiling of OCR'ed Historical Texts Revisited
In the absence of ground truth it is not possible to automatically determine
the exact spectrum and occurrences of OCR errors in an OCR'ed text. Yet, for
interactive postcorrection of OCR'ed historical printings it is extremely
useful to have a statistical profile available that provides an estimate of
error classes with associated frequencies, and that points to conjectured
errors and suspicious tokens. The method introduced in Reffle (2013) computes
such a profile, combining lexica, pattern sets and advanced matching techniques
in a specialized Expectation Maximization (EM) procedure. Here we improve this
method in three respects: First, the method in Reffle (2013) is not adaptive:
user feedback obtained by actual postcorrection steps cannot be used to compute
refined profiles. We introduce a variant of the method that is open for
adaptivity, taking correction steps of the user into account. This leads to
higher precision with respect to recognition of erroneous OCR tokens. Second,
during postcorrection often new historical patterns are found. We show that
adding new historical patterns to the linguistic background resources leads to
a second kind of improvement, enabling even higher precision by telling
historical spellings apart from OCR errors. Third, the method in Reffle (2013)
does not make any active use of tokens that cannot be interpreted in the
underlying channel model. We show that adding these uninterpretable tokens to
the set of conjectured errors leads to a significant improvement in recall
for error detection, while at the same time improving precision.
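The interpretation logic underlying the profile can be illustrated with a toy example. The lexicon and pattern sets below are invented for illustration; the actual method combines them in an EM procedure rather than by simple substring substitution:

```python
# Toy profiling step: each OCR token is explained, if possible, as a lexicon
# word transformed by a historical spelling pattern or an OCR error pattern.
# Tokens with no explanation are "uninterpretable" and, per the third
# improvement above, are added to the set of conjectured errors.
LEXICON = {"und", "nacht", "tag"}
HIST = [("th", "t")]                 # historical "th" for modern "t"
OCR = [("rn", "m"), ("uu", "un")]    # typical OCR confusions

def interpret(token):
    if token in LEXICON:
        return "correct"
    for hist_form, modern_form in HIST:
        if hist_form in token and token.replace(hist_form, modern_form) in LEXICON:
            return "historical spelling"
    for err, truth in OCR:
        if err in token and token.replace(err, truth) in LEXICON:
            return "ocr error"
    return "uninterpretable"  # conjectured error

profile = {t: interpret(t) for t in ["und", "nachth", "uud", "xyzq"]}
```

Counting these per-token verdicts over a whole text yields the statistical profile of error classes and frequencies that guides interactive postcorrection.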
LatMor: A Latin Finite-State Morphology Encoding Vowel Quantity
We present the first large-coverage, open-source finite-state morphology for Latin (called LatMor) that both parses and generates vowel quantity information. LatMor is based on the Berlin Latin Lexicon, comprising about 70,000 lemmata of classical Latin, compiled by the group of Dietmar Najock in their work on concordances of Latin authors (see Rapsch and Najock, 1991) and recently updated by us. Compared to the well-known Morpheus system of Crane (1991, 1998), which is written in C, based on the 50,000 lemmata of Lewis and Short (1907), and not well documented and therefore not easily extended, our new morphology has a larger vocabulary, is about 60 to 1,200 times faster, and is built in the form of finite-state transducers, which can analyze as well as generate wordforms and represent the state-of-the-art implementation method in computational morphology. The current coverage of LatMor is evaluated against Morpheus and other existing systems (some of which are not openly accessible) and is shown to rank first among all systems, together with the Pisa LEMLAT morphology (not yet openly accessible). Recall has been analyzed using the Latin Dependency Treebank as gold data, and the remaining defect classes have been identified. LatMor is available under an open-source licence to allow its wide usage by all interested parties.
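The bidirectional analyze/generate behavior of such a transducer can be illustrated with a toy lookup table; the entries below are a hypothetical fragment, not LatMor's actual lexicon or tag set (macrons mark long vowels):

```python
# Toy bidirectional Latin morphology with vowel quantity. A real finite-state
# morphology like LatMor compiles such surface/analysis mappings into a
# transducer; two dicts stand in for the transducer here.
PAIRS = [
    ("amō",  "amō<V><pres><1sg>"),
    ("amās", "amō<V><pres><2sg>"),
    ("amat", "amō<V><pres><3sg>"),
]
ANALYZE = {}
GENERATE = {}
for surface, analysis in PAIRS:
    ANALYZE.setdefault(surface, []).append(analysis)    # parse direction
    GENERATE.setdefault(analysis, []).append(surface)   # generation direction

print(ANALYZE["amās"])                 # ['amō<V><pres><2sg>']
print(GENERATE["amō<V><pres><3sg>"])   # ['amat']
```

Because the mapping is stored as a relation rather than a one-way function, the same resource answers both "what does this wordform mean?" and "how is this analysis spelled?", including the vowel quantities.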
State of the Art Optical Character Recognition of 19th Century Fraktur Scripts using Open Source Engines
In this paper we evaluate Optical Character Recognition (OCR) of 19th century
Fraktur scripts without book-specific training using mixed models, i.e. models
trained to recognize a variety of fonts and typesets from previously unseen
sources. We describe the training process leading to strong mixed OCR models
and compare them to freely available models of the popular open source engines
OCRopus and Tesseract as well as the commercial state of the art system ABBYY.
For evaluation, we use a varied collection of unseen data from books, journals,
and a dictionary from the 19th century. The experiments show that training
mixed models with real data is superior to training with synthetic data and
that the novel OCR engine Calamari outperforms the other engines considerably,
on average reducing ABBYY's character error rate (CER) by over 70%, resulting
in an average CER below 1%.
Comment: Submitted to DHd 2019 (https://dhd2019.org/), which demands a...
creative... submission format. Consequently, some captions might look weird
and some links aren't clickable. Extended version with more technical details
and some fixes to follow.
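The character error rate used in this evaluation is the Levenshtein edit distance between the OCR output and the ground truth, divided by the length of the ground truth; a minimal sketch:

```python
def cer(reference, hypothesis):
    """Character error rate: Levenshtein distance from hypothesis to
    reference, divided by the reference length. Single-row DP formulation."""
    m, n = len(reference), len(hypothesis)
    d = list(range(n + 1))
    for i in range(1, m + 1):
        prev, d[0] = d[0], i
        for j in range(1, n + 1):
            cur = min(
                d[j] + 1,       # deletion
                d[j - 1] + 1,   # insertion
                prev + (reference[i - 1] != hypothesis[j - 1]),  # substitution
            )
            prev, d[j] = d[j], cur
    return d[n] / m

print(round(cer("Fraktur", "Frakfur"), 3))  # one substitution in 7 chars
```

A CER below 1% thus means fewer than one edit per hundred ground-truth characters, averaged over the evaluation corpus.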
CIS OCR Workshop v1.0: OCR and postcorrection of early printings for digital humanities
The 2-day CIS OCR Workshop on "OCR and postcorrection of early printings for digital humanities", originally held at LMU Munich on 14/15 September 2015 (see http://www.cis.lmu.de/ocrworkshop).
Release date: 2016-02-25
CIS OCR Workshop by Uwe Springmann and Florian Fink is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.